Projection image-to-image translation in hybrid X-ray/MR imaging
The potential benefit of hybrid X-ray and MR imaging in the interventional
environment is large due to the combination of fast imaging with high contrast
variety. However, many existing image enhancement methods require the
image information of both modalities to be present in the same domain. To
unlock this potential, we present a solution to image-to-image translation from
MR projections to corresponding X-ray projection images. The approach is based
on a state-of-the-art image generator network that is modified to fit the
specific application. Furthermore, we propose the inclusion of a gradient map
in the loss function to allow the network to emphasize high-frequency details
in image generation. Our approach is capable of creating X-ray projection
images with a natural appearance. Additionally, our extensions show clear
improvement over the baseline method.
Comment: In proceedings of SPIE Medical Imaging 201
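The gradient-map term described above can be sketched as an edge-magnitude penalty added to a pixel-wise L1 loss. This is a minimal numpy sketch under stated assumptions: the finite-difference gradient operator and the balancing weight `lam` are illustrative choices, not necessarily the authors' exact formulation.

```python
import numpy as np

def gradient_map(img):
    """Finite-difference gradient magnitude of a 2-D image."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.sqrt(gx**2 + gy**2)

def gradient_weighted_l1(pred, target, lam=1.0):
    """Pixel-wise L1 loss plus an L1 penalty on the gradient maps.

    The extra term pushes the generator to reproduce high-frequency
    edges, not just low-frequency intensities. `lam` is a hypothetical
    balancing weight."""
    l1 = np.mean(np.abs(pred - target))
    grad = np.mean(np.abs(gradient_map(pred) - gradient_map(target)))
    return l1 + lam * grad
```

With `lam = 0` this degrades to plain L1; raising `lam` penalizes blurry edges more strongly, which is the stated motivation for including the gradient map.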
Whole Slide Multiple Instance Learning for Predicting Axillary Lymph Node Metastasis
Breast cancer is a major concern for women's health globally, with axillary
lymph node (ALN) metastasis identification being critical for prognosis
evaluation and treatment guidance. This paper presents a deep learning (DL)
classification pipeline for quantifying clinical information from digital
core-needle biopsy (CNB) images, with one step fewer than existing methods. A
publicly available dataset of 1058 patients was used to evaluate the
performance of different baseline state-of-the-art (SOTA) DL models in
classifying ALN metastatic status based on CNB images. An extensive ablation
study of various data augmentation techniques was also conducted. Finally, the
manual tumor segmentation and annotation step performed by the pathologists was
assessed.
Comment: Accepted for MICCAI DEMI Workshop 202
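The whole-slide multiple-instance setting can be illustrated with a minimal attention-based MIL pooling step in the style of Ilse et al.: patch embeddings from one image are aggregated into a single bag vector via learned attention, so no patch-level tumor annotations are needed. The pooling style and parameter shapes (`V`, `w`) are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(feats, V, w):
    """Aggregate patch embeddings (n, d) from one slide into a single
    bag-level embedding (d,) using attention weights.

    V (h, d) and w (h,) are hypothetical learned attention parameters."""
    scores = np.tanh(feats @ V.T) @ w   # one relevance score per patch
    alpha = softmax(scores)             # normalized patch importance
    return alpha @ feats, alpha         # attention-weighted average
```

A slide-level classifier then operates directly on the bag embedding, which is how such a pipeline drops the manual segmentation step.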
Projection-to-Projection Translation for Hybrid X-ray and Magnetic Resonance Imaging
Hybrid X-ray and magnetic resonance (MR) imaging holds large potential in interventional medical imaging applications due to the broad contrast variety of MRI combined with the fast imaging of X-ray-based modalities. To fully utilize the vast amount of existing image enhancement techniques, the corresponding information from both modalities must be present in the same domain. For image-guided interventional procedures, X-ray fluoroscopy has proven to be the modality of choice. Synthesizing one modality from the other is in this case an ill-posed problem due to ambiguous signal and overlapping structures in projective geometry. To take on these challenges, we present a learning-based solution to MR-to-X-ray projection-to-projection translation. We propose an image generator network that focuses on high representation capacity in higher-resolution layers to allow for accurate synthesis of fine details in the projection images. Additionally, we propose a weighting scheme in the loss computation that favors high-frequency structures, focusing on the important details and contours in projection imaging. The proposed extensions prove valuable in generating X-ray projection images with a natural appearance. Our approach achieves a deviation from the ground truth of only 6% and a structural similarity measure of 0.913 ± 0.005. In particular, the high-frequency weighting assists in generating projection images with a sharp appearance and reduces erroneously synthesized fine details.
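The structural similarity figure quoted above (0.913 ± 0.005) is presumably computed with the standard windowed SSIM; a simplified single-window numpy version is useful as a sanity check. The constants follow the common SSIM defaults, but treat this global-statistics variant as an assumption, not the evaluation code behind the paper's number.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM computed from global image statistics rather
    than a sliding window. Published results typically use the
    windowed variant, so values here are only indicative."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1.0; structurally inverted images score well below it, which is what makes SSIM a sharper quality measure than plain pixel deviation.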
Automated Volume Corrected Mitotic Index Calculation Through Annotation-Free Deep Learning using Immunohistochemistry as Reference Standard
The volume-corrected mitotic index (M/V-Index) was shown to provide
prognostic value in invasive breast carcinomas. However, despite its prognostic
significance, it is not established as the standard method for assessing
aggressive biological behaviour, due to the high additional workload associated
with determining the epithelial proportion. In this work, we show that a
deep learning pipeline trained solely with an annotation-free,
immunohistochemistry-based approach provides accurate epithelial
segmentation in canine breast carcinomas. We compare our automatic
framework with the manually annotated M/V-Index in a study with three
board-certified pathologists. Our results indicate that the deep learning-based
pipeline shows expert-level performance, while providing time efficiency and
reproducibility.
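As a rough illustration of the quantity being automated: the volume-corrected mitotic index relates the mitotic count to the epithelial fraction of the examined tissue, which is the part the deep learning pipeline estimates. The normalization below (mitoses per mm² of epithelium) is schematic; the exact definition and units in the M/V-index literature may differ.

```python
def mv_index(mitotic_count, examined_area_mm2, epithelial_fraction):
    """Schematic volume-corrected mitotic index: mitotic figures per
    mm^2 of examined tissue, corrected by the epithelial volume
    fraction in (0, 1]. Hypothetical formulation for illustration."""
    if not 0.0 < epithelial_fraction <= 1.0:
        raise ValueError("epithelial_fraction must be in (0, 1]")
    return mitotic_count / (examined_area_mm2 * epithelial_fraction)
```

Halving the epithelial fraction doubles the index, which is why an inaccurate epithelial estimate directly distorts the prognostic value and why the manual determination carries such a high workload.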
Deep learning-based Subtyping of Atypical and Normal Mitoses using a Hierarchical Anchor-Free Object Detector
Mitotic activity is key for the assessment of malignancy in many tumors.
Moreover, it has been demonstrated that the proportion of abnormal to normal
mitoses is of prognostic significance. Atypical mitotic figures (MF) can
be identified morphologically as having segregation abnormalities of the
chromatids. In this work, we perform, for the first time, automatic subtyping
of mitotic figures into normal and atypical categories according to
characteristic morphological appearances of the different phases of mitosis.
Using the publicly available MIDOG21 and TUPAC16 breast cancer mitosis
datasets, two experts blindly subtyped mitotic figures into five morphological
categories. Further, we set up a state-of-the-art object detection pipeline
extending the anchor-free FCOS approach with a gated hierarchical
subclassification branch. Our labeling experiment indicated that subtyping of
mitotic figures is a challenging task and prone to inter-rater disagreement,
which we found in 24.89% of MF. Using the more diverse MIDOG21 dataset for
training and TUPAC16 for testing, we reached a mean overall average precision
score of 0.552, a ROC AUC score of 0.833 for atypical/normal MF and a mean
class-averaged ROC-AUC score of 0.977 for discriminating the different phases
of cells undergoing mitosis.
Comment: 6 pages, 2 figures, 2 tables
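The gated hierarchical subclassification branch can be sketched as follows: subtype probabilities are multiplied (gated) by the parent detection probability, so a subtype score can never exceed the confidence that a mitotic figure is present at all. The two-level softmax layout here is an illustrative assumption, not the exact FCOS extension.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_hierarchical_probs(parent_logits, sub_logits):
    """Joint probabilities in a gated hierarchical head.

    parent_logits: hypothetical [background, mitotic figure] logits.
    sub_logits: logits over the morphological subtypes.
    The subtype distribution is gated by the parent probability, so
    each joint score is bounded by the parent detection score."""
    p_parent = softmax(parent_logits)
    p_sub = softmax(sub_logits)
    return p_parent[1] * p_sub
```

This gating keeps the subtype head consistent with the detector: suppressing a false detection automatically suppresses all of its subtype scores.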